@nlpjs/lang-it
You can install @nlpjs/lang-it:
npm install @nlpjs/lang-it
Normalization converts a text to lowercase and removes character decorations (accents and other diacritics).
const { NormalizerIt } = require('@nlpjs/lang-it');
const normalizer = new NormalizerIt();
const input = 'Questo dòvrebbe essere normalizzato';
const result = normalizer.normalize(input);
console.log(result);
// output: questo dovrebbe essere normalizzato
Tokenization splits a sentence into words.
const { TokenizerIt } = require('@nlpjs/lang-it');
const tokenizer = new TokenizerIt();
const input = 'Questo dovrebbe essere tokenizzato';
const result = tokenizer.tokenize(input);
console.log(result);
// output: [ 'Questo', 'dovrebbe', 'essere', 'tokenizzato' ]
The tokenizer can also normalize the sentence before tokenizing; to do so, pass true as the second argument to the tokenize method:
const { TokenizerIt } = require('@nlpjs/lang-it');
const tokenizer = new TokenizerIt();
const input = 'Questo dovrebbe essere tokenizzato';
const result = tokenizer.tokenize(input, true);
console.log(result);
// output: [ 'questo', 'dovrebbe', 'essere', 'tokenizzato' ]
Using the class StopwordsIt you can identify whether a word is a stopword:
const { StopwordsIt } = require('@nlpjs/lang-it');
const stopwords = new StopwordsIt();
console.log(stopwords.isStopword('uno'));
// output: true
console.log(stopwords.isStopword('sviluppatore'));
// output: false
Using the class StopwordsIt you can remove stopwords from an array of words:
const { StopwordsIt } = require('@nlpjs/lang-it');
const stopwords = new StopwordsIt();
console.log(stopwords.removeStopwords(['ho', 'visto', 'uno', 'sviluppatore']));
// output: [ 'visto', 'sviluppatore' ]
Using the class StopwordsIt you can reset its dictionary and rebuild it from another set of words:
const { StopwordsIt } = require('@nlpjs/lang-it');
const stopwords = new StopwordsIt();
stopwords.dictionary = {};
stopwords.build(['ho', 'visto']);
console.log(stopwords.removeStopwords(['ho', 'visto', 'uno', 'sviluppatore']));
// output: [ 'uno', 'sviluppatore' ]
A stemmer is an algorithm that calculates the stem (root) of a word by removing its affixes.
You can stem a single word using the method stemWord:
const { StemmerIt } = require('@nlpjs/lang-it');
const stemmer = new StemmerIt();
const input = 'sviluppatore';
console.log(stemmer.stemWord(input));
// output: svilupp
You can stem an array of words using the method stem:
const { StemmerIt } = require('@nlpjs/lang-it');
const stemmer = new StemmerIt();
const input = ['ho', 'visto', 'uno', 'sviluppatore'];
console.log(stemmer.stem(input));
// output: [ 'ho', 'vist', 'uno', 'svilupp' ]
As you can see, the stemmer does not normalize internally, so words containing uppercase characters remain uppercased. Also, the stemmer works with lowercased affixes, so sviluppatore will be stemmed as svilupp, but SVILUPPATORE will not be changed.
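A minimal sketch of this behavior (the expected outputs follow from the description above):
const { StemmerIt } = require('@nlpjs/lang-it');
const stemmer = new StemmerIt();
// Lowercase input: the affix is recognized and stripped
console.log(stemmer.stemWord('sviluppatore'));
// output: svilupp
// Uppercase input: no internal normalization, so the word is returned unchanged
console.log(stemmer.stemWord('SVILUPPATORE'));
// output: SVILUPPATORE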
You can tokenize and stem a sentence, including normalization, with the method tokenizeAndStem:
const { StemmerIt } = require('@nlpjs/lang-it');
const stemmer = new StemmerIt();
const input = 'Ho visto uno SVILUPPATORE';
console.log(stemmer.tokenizeAndStem(input));
// output: [ 'ho', 'vist', 'uno', 'svilupp' ]
When calling the tokenizeAndStem method of the class StemmerIt, the second parameter is a boolean that sets whether the stemmer must keep the stopwords (true) or remove them (false). Before using it, a stopwords instance must be set on the stemmer:
const { StemmerIt, StopwordsIt } = require('@nlpjs/lang-it');
const stemmer = new StemmerIt();
stemmer.stopwords = new StopwordsIt();
const input = 'Ho visto uno sviluppatore';
console.log(stemmer.tokenizeAndStem(input, false));
// output: [ 'vist', 'svilupp' ]
To use sentiment analysis you'll need to create a new Container and register the plugin LangIt, because internally the SentimentAnalyzer class tries to retrieve the normalizer, tokenizer, stemmer and sentiment dictionaries from the container.
const { Container } = require('@nlpjs/core');
const { SentimentAnalyzer } = require('@nlpjs/sentiment');
const { LangIt } = require('@nlpjs/lang-it');
(async () => {
  const container = new Container();
  container.use(LangIt);
  const sentiment = new SentimentAnalyzer({ container });
  const result = await sentiment.process({
    locale: 'it',
    text: 'amore per i gatti',
  });
  console.log(result.sentiment);
})();
// output:
// {
//   score: 0.25,
//   numWords: 4,
//   numHits: 2,
//   average: 0.0625,
//   type: 'pattern',
//   locale: 'it',
//   vote: 'positive'
// }
The output of the sentiment analysis includes:
- score: the sum of the sentiment values of the words that scored
- numWords: the number of words in the utterance
- numHits: the number of words that have an associated sentiment value
- average: the score divided by the number of words
- type: the sentiment dictionary used to calculate it
- locale: the locale of the utterance
- vote: the overall classification ('positive', 'negative' or 'neutral')
You can also combine this package with @nlpjs/nlp to train and use a complete NLP pipeline:
const { containerBootstrap } = require('@nlpjs/core');
const { Nlp } = require('@nlpjs/nlp');
const { LangIt } = require('@nlpjs/lang-it');
(async () => {
  const container = await containerBootstrap();
  container.use(Nlp);
  container.use(LangIt);
  const nlp = container.get('nlp');
  nlp.settings.autoSave = false;
  nlp.addLanguage('it');
  // Adds the utterances and intents for the NLP
  nlp.addDocument('it', 'Addio per ora', 'greetings.bye');
  nlp.addDocument('it', 'arrivederci e stai attento', 'greetings.bye');
  nlp.addDocument('it', 'molto bene a dopo', 'greetings.bye');
  nlp.addDocument('it', 'devo andare', 'greetings.bye');
  nlp.addDocument('it', 'ciao', 'greetings.hello');
  // Train also the NLG
  nlp.addAnswer('it', 'greetings.bye', 'fino alla prossima volta');
  nlp.addAnswer('it', 'greetings.bye', 'A presto!');
  nlp.addAnswer('it', 'greetings.hello', 'Ciao, come stai');
  nlp.addAnswer('it', 'greetings.hello', 'Saluti!');
  await nlp.train();
  const response = await nlp.process('it', 'devo andare');
  console.log(response);
})();
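The logged response includes, among other fields, the detected intent, its confidence score and the selected answer. A trimmed sketch of its shape (exact score varies per training run, and the answer is picked at random from the registered ones):
// {
//   locale: 'it',
//   utterance: 'devo andare',
//   intent: 'greetings.bye',
//   score: 1,
//   answer: 'A presto!',
//   ...
// }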
You can read the guide on how to contribute at Contributing.
You can read the Code of Conduct at Code of Conduct.
This project is developed by AXA Group Operations Spain S.A.
If you need to contact us, you can do so at opensource@axa.com.
Copyright (c) AXA Group Operations Spain S.A.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.